Multi-Sensor Based Biometric System Using Image Processing

 

Jaspreet Kaur*, Rajdeep Singh Sohal

Department of ECE, GNDU RC Fattu Dhinga (Sultanpur Lodhi), Kapurthala (Punjab) India.

*Corresponding Author Email: Jaspreet.ecespl@gndu.ac.in

 

ABSTRACT:

The field of biometrics has received much attention in recent years as an appealing alternative to conventional authentication systems such as passwords. Authentication is required whenever it is necessary to confirm that a person is who they claim to be. As the number of users grows, the number of stored traits grows with it, which affects the database and the performance of the authentication system. Single-biometric systems are widely used because they offer easy and fast access to a person's biometric description. However, as security measures advance, threats to security systems develop in a similar manner. It has therefore become important to work on multimodal biometric systems that not only provide a higher degree of security but do so with greater precision and efficiency. In this paper a prototype for such a system is proposed. The focus is on the design of a multimodal biometric system based on teeth, speech and signature recognition.

 

KEYWORDS: PSNR, Entropy, MSE, Steganography, Histogram, Correlation coefficient.

 

 


INTRODUCTION:

The field of biometrics has received much attention in recent years because it is an unconventional alternative to traditional authentication systems such as passwords. Authentication is required when it is necessary to know whether a person is who he or she claims to be. As the number of users grows, the number of enrolled traits grows with it, which affects the database and the performance of the authentication system. Single-biometric systems have achieved prominence because they offer easy and fast access to a person's biometric features. However, to enhance security further, the combination of different biometrics, known as a multi-biometric system in which two or more biometric traits are combined, has become a current area of research. Verification is the process of matching (comparing) given biometric data (not stored in a database) against the biometric reference template (stored in a database) of the single person whose identity is being checked, to determine whether it matches the enrollee's template. For example, on a computer system a unique verification token (with direct correspondence to each username) is intended to verify the identity of a legitimate user. Any unimodal biometric system can be combined with others to form a multimodal biometric system, for example:

·       Speech and signature

·       Face and iris

·       Fingerprint and hand geometry

·       Speech, signature and face

 

In this paper, a multi-sensor based system is proposed in which the speech, signature and teeth of a person are recognized by a logic-condition based algorithm, together with an enhanced password technique using image steganography, which adds a higher level of security to the identification process.
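As an illustrative sketch (not the authors' actual implementation, which was written in MATLAB), the logic-condition fusion can be expressed as an AND rule: a person is accepted only when all three modalities independently resolve to the same enrolled identity. The function and identity names below are hypothetical.

```python
def fuse_decisions(signature_id, teeth_id, speech_id):
    """AND-rule fusion: accept only if every modality matched the
    same enrolled person; otherwise report 'not recognized' (None)."""
    if signature_id is not None and signature_id == teeth_id == speech_id:
        return signature_id
    return None

# All three traits from the same genuine person -> identified
print(fuse_decisions("Person 1", "Person 1", "Person 1"))  # Person 1
# Mixed traits from different people -> not recognized
print(fuse_decisions("Person 2", "Person 1", "Person 1"))  # None
```

This strict AND condition is what drives the "Not Recognized" outcomes later reported in Table 1 whenever the three inputs do not all come from one genuine person.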

 

LITERATURE REVIEW:

Considerable work has been done by researchers in the field of biometric security.

Cameron Whitelam et al. (2013) worked on two different watermarking- and steganography-based methods applied to different biometric data. These methods were introduced to protect biometric templates in the database against unauthorized attackers [1].

 

Vaibhav B. Joshi et al. (2011) explained a reversible watermarking technique, based on biometric logic, to make the authentication system more secure. The reversibility of the watermark in the proposed method ensured that its presence did not affect native biometric authentication [2].

 

Emanuele Maiorana et al. (2010) discussed another way to enhance the security of signature templates by applying a cryptosystem to a database of online signatures. The proposed protected online signature recognition system guaranteed recognition rates fully comparable with those of unprotected templates, giving reliable signature traits while barely affecting the entropy of the employed binary representation, which makes it more resistant to privacy attacks [3].

 

Mandeep Kaur et al. (2010) introduced the fusion of two modalities, speech and signature. This multimodal combination increased security and accuracy, yet system complexity grew with the number of features extracted from the multiple samples, and response time suffered as acquisition time increased. The procedure showed 95% accuracy and gave minimal false acceptance and false rejection rates [4].

 

Nagesh Kumar et al. (2010) proposed an efficient multimodal biometric face recognition technique using the speech signal, extended towards a new application in plastic surgery. In this technique, speaker identity was correlated with the physiological and behavioural characteristics of the speaker [5].

 

Seiichi Nakagawa et al. (2012) suggested a method combining Mel-frequency cepstral coefficients (MFCC) and phase information, using Gaussian mixture models (GMM) for speaker verification and identification. It achieved a speaker verification rate of about 97.3% to 98.6% with an equal error rate of only 0.93%. The experimental results showed that the combination of phase information and MFCC was also very effective for noisy speech when speaker models were trained on clean speech [6].

 

Eshwarappa M.N. et al. (2010) presented the combination of three modalities: speech, signature and handwriting. The work used different classifiers for feature extraction, employing DCT and MFCC with Z-score normalization. As a result, the identification performance was 100%, with a False Acceptance Rate (FAR) of 0% and a False Rejection Rate (FRR) of 0% [7].

 

Dapinder Kaur et al. (2013) introduced a feature-level fusion of two biometric modalities, speech and signature, yielding an efficient and robust biometric system with 98.2% accuracy and false acceptance and false rejection rates of 0.9%, close to fully accurate results. The classifiers used for feature extraction were MFCC (Mel-Frequency Cepstral Coefficients) and SIFT (Scale-Invariant Feature Transform) [8].

 

P.B. Kathe et al. (2014) discussed automated writer recognition for offline text using the Scale-Invariant Feature Transform (SIFT) descriptor, with methods involving a reduced number of parameters to create a robust writer recognition system; writer identification remains a promising arena for development in forensic analysis. Word segmentation and SIFT are used for feature extraction, and the proposed automated writer recognizer uses the SIFT algorithm to normalize the size of the text. After normalization, two distances are fused to measure the dissimilarity between two handwriting images [9].

 

Niall A. Fox et al. (2007) presented a system using Hidden Markov Models (HMM) that combines information from three different biometric sources in an automatic, unsupervised fusion adapting to the local performance of each expert. A benefit of the described approach is that audio-visual training data is not required to tune the fusion process. The results highlight the complementary nature of the mouth and face experts under clean and noisy test conditions, and the complementary nature of audio- and video-based data [10].

 

Nikhil Nigam et al. (2015) proposed an Encoded Hybrid Digital Watermarking Scheme (EHDWS) to improve image quality, based on the Discrete Wavelet Transform (DWT), Singular Value Decomposition (SVD) and Bose-Chaudhuri-Hocquenghem (BCH) codes. An image-based watermarking technique was introduced to enhance the security of biometric templates: an encoded hybrid DWT-based watermarking scheme using singular matrix decomposition, with BCH-code-based authentication [11].

 

Vincenzo Conti et al. (2010) introduced the combination of two biometric modalities, iris and fingerprint, in which features were extracted at different levels and fused using a frequency-based approach for a multimodal biometric identification system. The frequency-based approach produced a homogeneous biometric vector integrating iris and fingerprint data, and achieved 0% FAR and 0.75% FRR. A template-level fusion algorithm working on a unified biometric descriptor was presented, and a Log-Gabor-algorithm-based codifier was adopted to encode both fingerprint and iris features, obtaining a unified template [12].

 

Besides the work above, a number of researchers are active and publishing in this area, proposing new models, algorithms and systems. In the same field, a system with three modalities is proposed in this paper. The methods adopted, along with the results, are given in the following sections.

 

METHODOLOGY AND DATA COLLECTION:

The research has been carried out using real-time images of teeth and signatures together with speech signals. The images were captured in such a way that all shades of the text lay on, and could be distinguished from, the white background. By following a well-defined methodology, the objectives of the research were achieved. The flow chart in Figure 1 gives an overview of the research flow.

 

After getting started, the very first step is to acquire the information to be processed (the test images of signature and teeth) and the information to be used as references (the database). A large amount of data was therefore collected from a random population. This data has not only been used for the current research but can also serve future research goals. The second major problem, the speech signal, was solved by obtaining speech samples recorded in different accents from different people.

 

 


Figure 1: Overview of methodology adopted


 

The database was generated in MATLAB in accordance with the information collected from the population and from the internet. Figures 2 and 3 show the database images of the signatures and teeth of three people, respectively:
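The statistical features stored for each database image (STD, mean, variance and entropy, as later reported in Table 1) can be computed as in the following sketch. The paper's implementation was in MATLAB; this Python/NumPy version with a hypothetical `image_features` helper is only an assumed illustration.

```python
import numpy as np

def image_features(img):
    """Feature vector of a grey-level image: standard deviation, mean,
    variance, and Shannon entropy of the grey-level histogram."""
    img = np.asarray(img, dtype=float)
    if img.max() > 1.0:
        img = img / 255.0                          # normalise to [0, 1]
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                                   # drop empty bins
    entropy = float(-np.sum(p * np.log2(p)))
    return {"STD": float(img.std()), "Mean": float(img.mean()),
            "Variance": float(img.var()), "Entropy": entropy}

# Near-white signature-like image: expect a high mean and a low
# variance, similar in character to the values shown in Table 1.
demo = np.full((64, 64), 240, dtype=np.uint8)
demo[30:34, 10:50] = 20                            # dark "ink" stroke
print(image_features(demo))
```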

 


 

RESULTS AND DISCUSSION:

The developed algorithm was tested on a random collection of test sets belonging to different persons and produced very notable results, given below. When the correlation between the two sets of data (from the test image and from the database) was calculated, the maximum value observed was treated as the best match, and the algorithmic results were found to be 100% accurate in all cases. A complete result table for the different inputs, along with various statistical parameters and the percentage efficiency of the algorithm, is given below.
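A minimal sketch of this correlation-based matching (the actual work was done in MATLAB; the database layout and values here are hypothetical): the test feature vector is correlated with every enrolled template, and the person with the maximum correlation coefficient is taken as the best match.

```python
import numpy as np

def best_match(test_vec, database):
    """Correlate the test feature vector with each enrolled template
    and return the identity with the highest correlation coefficient."""
    scores = {person: float(np.corrcoef(test_vec, template)[0, 1])
              for person, template in database.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Hypothetical 4-element feature vectors (STD, Mean, Variance, Entropy)
db = {"Person 1": [0.070, 0.936, 0.004, 0.341],
      "Person 2": [0.055, 0.956, 0.003, 0.255],
      "Person 3": [0.089, 0.909, 0.008, 0.439]}
person, score = best_match([0.071, 0.935, 0.004, 0.342], db)
print(person, round(score, 4))
```

A test vector nearly identical to an enrolled template yields a coefficient close to 1, which is the "best match value" criterion described above.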

 


 

Table 1: Results Corresponding to Various Input Data Combinations

| S. No. | Test Image* (Signature) | Test Image** (Teeth) | Test Sample of Speech*** | Feature Values (Signature) | Feature Values (Teeth) | Feature Values (Speech) | Result | Accuracy |
|---|---|---|---|---|---|---|---|---|
| 1 | Person 1 | Person 1 | Person 1 | STD=0.070, Mean=0.936, Variance=0.004, Entropy=0.341 | STD=0.275, Mean=0.565, Variance=0.076, Entropy=0.987 | PIT=351.4, RMS=0.075, TEM=180.8, PC=-0.017, ZC=875.5, RO=7333.4, ED=2.76, CD=3638.7, SP=9.85, EN=0.887 | Person 1 Identified | TRR=100%, FRR=0% |
| | Person 1 | Person 2 | Person 2 | NC | NC | NC | Person Not Recognized | |
| | Person 1 | Person 3 | Person 3 | NC | NC | NC | Person Not Recognized | |
| 2 | Person 2 | Person 1 | Person 1 | NC | NC | NC | Person Not Recognized | TRR=100%, FRR=0% |
| | Person 2 | Person 2 | Person 2 | STD=0.055, Mean=0.956, Variance=0.003, Entropy=0.255 | STD=0.189, Mean=0.232, Variance=0.036, Entropy=0.783 | PIT=352.8, RMS=0.09, TEM=154.9, PC=-0.006, ZC=914.1, RO=7572.2, ED=3.68, CD=3590.0, SP=10.23, EN=0.881 | Person 2 Identified | |
| | Person 2 | Person 3 | Person 3 | NC | NC | NC | Person Not Recognized | |
| 3 | Person 3 | Person 1 | Person 1 | NC | NC | NC | Person Not Recognized | TRR=100%, FRR=0% |
| | Person 3 | Person 2 | Person 2 | NC | NC | NC | Person Not Recognized | |
| | Person 3 | Person 3 | Person 3 | STD=0.089, Mean=0.909, Variance=0.008, Entropy=0.439 | STD=0.170, Mean=0.223, Variance=0.029, Entropy=0.767 | PIT=361.8, RMS=0.110, TEM=158.6, PC=0.252, ZC=1194.8, RO=7711.5, ED=5.52, CD=3632.5, SP=8.24, EN=0.892 | Person 3 Identified | |

Overall Correlation (between test person features and database person features) = 1 (100% match); TRR = 100%; FRR = 0%; Overall Accuracy = 100%

Test Image*: Person 1-Fig 2(a), Person 2- Fig 2(b), Person 3- Fig 2(c)

Test Image**: Person 1- Fig 3(a), Person 2- Fig 3(b), Person 3- Fig 3(c)

Test Sample of Speech***: Person 1- Fig (3), Person 2- Fig (4), Person 3- Figure (5)

STD: Standard Deviation; PIT: Pitch; RMS: Root Mean Square Energy; TEM: Tempo; PC: Pulse Clarity; ZC: Zero Cross; RO: Roll Off; ED: Event Density; CD: Centroid; SP: Spread; EN: Entropy; NC: Not Calculated, as the inputs are not correct (not all provided by one genuine person); TRR: True Recognition Rate; FRR: False Recognition Rate.
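Most of the speech descriptors in Table 1 (pitch, tempo, roll-off, pulse clarity, etc.) come from standard audio-analysis toolboxes; two of them, RMS energy and the zero-crossing measure, are simple enough to sketch directly. This Python version is an assumed illustration, not the paper's MATLAB code, and the ZC value is computed here as crossings per second.

```python
import numpy as np

def basic_speech_features(signal, fs):
    """Compute RMS energy and the zero-crossing rate (sign changes
    per second) of a speech signal sampled at `fs` Hz."""
    signal = np.asarray(signal, dtype=float)
    rms = float(np.sqrt(np.mean(signal ** 2)))
    crossings = int(np.sum(signal[:-1] * signal[1:] < 0))
    zc_rate = crossings * fs / len(signal)
    return {"RMS": rms, "ZC": zc_rate}

# 1 s of a 440 Hz sine sampled at 8 kHz: roughly 2 crossings per
# cycle, so ZC should land near 880 and RMS near 1/sqrt(2).
fs = 8000
t = np.arange(fs) / fs
print(basic_speech_features(np.sin(2 * np.pi * 440 * t), fs))
```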

 

 

 


CHALLENGES FACED:

The most noteworthy challenge faced during the work was capturing high-quality images of the persons with maximum detail. It is a difficult task to obtain an image with all the details within a processable memory size: such images are produced at high resolution and are therefore 6-10 MB in size. For this reason, the images were initially compressed with a JPEG encoder.

The second challenge was coping with illumination conditions, since illumination varies considerably even when the image acquisition time is fixed. The solution adopted here is a variable, user-defined thresholding value.
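The user-defined thresholding mentioned above can be sketched as follows (an assumed Python illustration; the original work used MATLAB): the grey-level image is binarized with a threshold the user tunes per lighting condition, so signature strokes remain separable from the background.

```python
import numpy as np

def binarize(img, threshold):
    """Binarize a grey-level image with a user-defined threshold:
    pixels at or above the threshold become background (1),
    darker 'ink' pixels become foreground (0)."""
    return (np.asarray(img) >= threshold).astype(np.uint8)

# Brightly and dimly lit captures of the same stroke need different
# thresholds to yield the same binary result.
bright = np.array([[250, 250, 40, 250]])
dim    = np.array([[160, 160, 25, 160]])
print(binarize(bright, 128))   # [[1 1 0 1]]
print(binarize(dim, 90))       # [[1 1 0 1]]
```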

 

CONCLUSION AND FUTURE WORK:

In this paper a new algorithm has been proposed and implemented for identifying a person based on biometric information from teeth, speech and signature. At a glance, the results show:

1. A simplified, accurate and robust technique.

2. Working on real time environmental conditions.

3. Correlation parameters calculation verifying the accuracy.

4. A novel approach in the field of biometrics.

As for future work, after verification of the results under different environmental conditions, the system can be deployed in real-time live projects and made adaptable as a hybrid model with genetic algorithms or neuro-fuzzy optimization techniques.

 

ACKNOWLEDGMENT:

This work was supported by the ECE Department of Guru Nanak Dev University Regional Campus Fattu Dhinga, Punjab, which provided excellent laboratory facilities (Research Lab) and MATLAB software for the development and testing of the algorithm.

 

REFERENCES:

[1].           Cameron Whitelam, Nnamdi Osia and Thirimachos Bourlai, "Securing multimodal biometric data through watermarking and steganography", IEEE, 2013.

[2].           Vaibhav B. Joshi, Mehul S. Raval, Suman Mitra, Priti P. Rege and S.K. Parulkar, "Reversible watermarking technique to enhance security of a biometric authentication system", IEEE International Conference on Multimedia and Expo 2011, Melbourne, pp. 1027-1032.

[3].           Emanuele Maiorana and Patrizio Campisi, "Fuzzy commitment for function based signature template protection", IEEE Signal Processing Letters, vol. 17, no. 3, March 2010.

[4].           Mandeep Kaur, Akshay Girdhar and Manvjeet Kaur, "Multimodal biometric system using speech and signature modalities", International Journal of Computer Applications (0975-8887), vol. 5, no. 12, August 2010.

[5].           Nagesh Kumar and M.N. Shanmukha Swamy, "An efficient multimodal biometric face recognition using speech signal", IEEE, 2010.

[6].           Seiichi Nakagawa, Longbiao Wang and Shinji Ohtsuka, "Speaker identification and verification by combining MFCC and phase information", IEEE Transactions on Audio, Speech and Language Processing, vol. 20, no. 4, May 2012.

[7].           Eshwarappa M.N. and Mrityunjaya V. Latte, "Multimodal biometric person authentication using speech, signature and handwriting features", (IJACSA) International Journal of Advanced Computer Science and Applications, Special Issue on Artificial Intelligence, May 2010.

[8].           Dapinder Kaur, Gaganpreet Kaur and Dheerendra Singh, "Efficient and robust multimodal biometric system for feature level fusion (speech and signature)", International Journal of Computer Applications (0975-8887), vol. 75, no. 5, August 2013.

[9].           P.B. Kathe and V.D. Dabhade, "Automated writer recognizer for offline text using scale invariant feature transformation descriptor", International Journal of Computer Applications (0975-8887), Innovations and Trends in Computer and Communication Engineering (ITCCE-2014).

[10].        Niall A. Fox, Ralph Gross, Jeffrey F. Cohn and Richard B. Reilly, "Robust biometric person identification using automatic classifier fusion of speech, mouth and face experts", IEEE Transactions on Multimedia, vol. 9, no. 4, June 2007.

[11].        Nikhil Nigam and Yogendra Kumar Jain, "Encoded hybrid DWT based watermarking scheme based on singular matrix decomposition", International Journal of Computer Applications (0975-8887), vol. 110, no. 14, January 2015.

[12].        Vincenzo Conti, Carmelo Militello, Filippo Sorbello and Salvatore Vitabile, "A frequency-based approach for features fusion in fingerprint and iris multimodal biometric identification systems", IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 40, no. 4, July 2010.

 

 

 

 

Received on 23.02.2017                                  Accepted on 16.03.2017        

©A&V Publications all right reserved

Research J. Engineering and Tech. 2017; 8(1): 53-62. 

DOI:  10.5958/2321-581X.2017.00009.5